
sql: address a couple of old TODOs #98823

Merged 1 commit into cockroachdb:master on Mar 21, 2023

Conversation

yuzefovich (Member) commented:

This commit addresses a couple of old TODOs in the connExecutor.dispatchToExecutionEngine method.

First, it simply removes the now-stale TODO about needing to Step() the txn before executing the query. The TODO referred to the old way FK checks were performed, but we now always run checks in a deferred fashion and perform the Steps correctly there for cascades and checks, so the TODO can be removed. (The TODO itself explicitly says it can be removed once checks and cascades are done in the deferred fashion.)
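For context, the deferred-execution path relies on sequencing steps: a mutation's writes only become visible to reads issued after a step, so cascades and checks run in the deferred fashion must be preceded by a Step. A minimal, self-contained sketch of that semantics (the toy txn below is purely illustrative and is not the kv.Txn API):

    package main

    import "fmt"

    // txn is a toy transaction with read-your-own-writes gated on explicit
    // sequencing steps (purely illustrative; not the kv.Txn API).
    type txn struct {
        visible map[string]string // writes visible to reads
        pending map[string]string // writes buffered since the last step
    }

    func newTxn() *txn {
        return &txn{visible: map[string]string{}, pending: map[string]string{}}
    }

    func (t *txn) put(k, v string) { t.pending[k] = v }

    func (t *txn) get(k string) (string, bool) {
        v, ok := t.visible[k]
        return v, ok
    }

    // step creates a sequencing point: everything written before the step
    // becomes visible to reads issued after it.
    func (t *txn) step() {
        for k, v := range t.pending {
            t.visible[k] = v
        }
        t.pending = map[string]string{}
    }

    func main() {
        t := newTxn()
        t.put("row1", "inserted") // the mutation's write

        if _, ok := t.get("row1"); !ok {
            fmt.Println("before step: a deferred check would NOT see the write")
        }
        t.step() // the deferred path steps before running cascades and checks
        if v, ok := t.get("row1"); ok {
            fmt.Println("after step: a deferred check sees", v)
        }
    }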

Second, it addresses a TODO about how some plan information is saved. Up until 961e66f, the cleanup of the planNode tree and the flow infra was intertwined, so in 6ae4881, in order to "sample" the plan with correct info (some of which is only available after execution is done), we delegated that sampling to planTop.close. However, with the planNode tree and flow cleanups refactored and decoupled, we no longer have to close the plan early. As a result, we now close the plan in a defer right after we create it (now the only place we do so), and this commit records the plan info explicitly right after returning from the execution engine, addressing the second TODO.
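Roughly, the new lifecycle looks like the following (a simplified sketch with hypothetical names and signatures, not the actual connExecutor.dispatchToExecutionEngine code):

    package main

    import "fmt"

    type planTop struct{}

    func (p *planTop) close()        { fmt.Println("plan closed") }
    func (p *planTop) savePlanInfo() { fmt.Println("plan info recorded") }

    // dispatch sketches the new lifecycle: the plan is closed in a single
    // defer right after it is created, and the plan info is recorded
    // explicitly once the execution engine returns.
    func dispatch() {
        plan := &planTop{}
        defer plan.close() // now the only place the plan is closed

        execWithEngine(plan)
        // Previously this happened implicitly inside planTop.close; now it
        // is an explicit step right after execution.
        plan.savePlanInfo()
    }

    func execWithEngine(plan *planTop) { fmt.Println("executing") }

    func main() { dispatch() }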

Epic: None

Release note: None

@yuzefovich yuzefovich requested review from knz, a team and cucaroach March 16, 2023 23:25
@cockroach-teamcity (Member)

This change is Reviewable

@yuzefovich (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @cucaroach and @knz)


pkg/sql/conn_executor_exec.go line 691 at r1 (raw file):

	// change would forget to add the call).
	//
	// TODO(andrei): really the code should be rearchitected to ensure

IIUC this whole comment, as well as the TODO, is still applicable, so I didn't change anything about it. Also, IIUC the unconditional Step below is still required.

@knz I'm hoping you'd be ok reviewing this patch and confirming my understanding.

@knz (Contributor) left a comment

Reviewed 6 of 6 files at r1, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @cucaroach and @yuzefovich)


pkg/sql/conn_executor_exec.go line 691 at r1 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

IIUC this whole comment, as well as the TODO, is still applicable, so I didn't change anything about it. Also, IIUC the unconditional Step below is still required.

@knz I'm hoping you'd be ok reviewing this patch and confirming my understanding.

sounds about right.


pkg/sql/conn_executor_exec.go line 1355 at r1 (raw file):

		ctx, planner, stmt.AST.StatementReturnType(), res, distribute, progAtomic,
	)
	planner.curPlan.savePlanInfo()

Shouldn't this be in a defer around the call to execWithDistSQLEngine, to handle the case where the call panics? With the previous implementation, savePlanInfo() was called by plan.close during panics.

@yuzefovich (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @cucaroach and @knz)


pkg/sql/conn_executor_exec.go line 1355 at r1 (raw file):

Previously, knz (Raphael 'kena' Poss) wrote…

Shouldn't this be in a defer around the call to execWithDistSQLEngine, to handle the case where the call panics? With the previous implementation, savePlanInfo() was called by plan.close during panics.

Done.

(Although if the panic were to occur in execWithDistSQLEngine, it wouldn't matter whether we stored this plan info or not: IIUC that plan info is only used in recordStatementSummary, and that method wouldn't execute in case of a panic. Still, I agree that it's cleaner to just defer it inside of execWithDistSQLEngine. The context for these changes is simplifying the code to make it easier to implement the multiple active portals feature.)
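The Go behavior both comments rely on, in a minimal standalone example: deferred calls still run while a panic unwinds the stack, so a defer inside the execution path records the plan info even on a panic, whereas a plain call placed after it would be skipped (illustrative names only):

    package main

    import "fmt"

    func savePlanInfo() { fmt.Println("plan info recorded") }

    func execute() {
        // The deferred call fires even when execute panics below, mirroring
        // how plan.close used to record the plan info during panics.
        defer savePlanInfo()
        panic("execution engine panicked")
    }

    func main() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        execute()
        fmt.Println("unreached") // a plain call placed here would be skipped too
    }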

@yuzefovich (Member, Author)

@knz do I have your approval on this? I want #96358 to be rebased on top of it.

@knz (Contributor) left a comment

Reviewed 1 of 1 files at r2, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @cucaroach)

@yuzefovich (Member, Author)

TFTR!

bors r+

@cucaroach (Contributor) left a comment

Reviewed 4 of 6 files at r1, 1 of 1 files at r2.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @yuzefovich)


pkg/sql/conn_executor.go line 2766 at r2 (raw file):

			// execInsertPlan
			func(ctx context.Context, p *planner, res RestrictedCommandResult) error {
				defer p.curPlan.close(ctx)

How important is this? Should it be backported? If COPY wanted to scrape post-execution stats or anything, would the plan need to still be open? Wondering if this should go in insertRowsInternal.


pkg/sql/distsql_running.go line 1552 at r2 (raw file):

	evalCtxFactory func(usedConcurrently bool) *extendedEvalContext,
) error {
	defer planner.curPlan.close(ctx)

Okay, this was being called for COPY, so probably no backport is necessary.

@yuzefovich (Member, Author) left a comment

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @cucaroach)


pkg/sql/distsql_running.go line 1552 at r2 (raw file):

Previously, cucaroach (Tommy Reilly) wrote…

Okay, this was being called for COPY, so probably no backport is necessary.

Yeah, this PR lifts this curPlan.close call a few layers up, so it doesn't fix any bug; it's more of a "philosophical cleanup".

@craig (craig bot, Contributor) commented Mar 21, 2023

Build succeeded.

@craig craig bot merged commit 18e6641 into cockroachdb:master Mar 21, 2023
@yuzefovich yuzefovich deleted the conn-exec-todos branch March 21, 2023 19:44